Ongoing advances in synthesis, physical design, and semiconductor implementation technologies have made 10-million-gate system-on-a-chip (SOC) designs physically achievable, and numerous design starts for chips of this size will occur during the next twelve months. Unfortunately, many engineering teams expect to verify the logical function of these monster hardware description language (HDL) designs using the same tools and methodologies that they used ten years ago for 100,000-gate designs. Existing methods have been stretched to the limit; even at 1 million gates, they devour more than 60 percent of the design cycle. Achieving functional verification closure for huge, modern designs requires new tools and new techniques.
Traditional functional verification methodologies are "black-box." That is, they apply stimulus to the inputs of a register-transfer-level (RTL) design and compare the outputs to expected values. Such an end-to-end approach ignores the details of the internal implementation. Black-box methods include directed, pseudo-random, and system-level simulation. These methods are inadequate for thorough functional verification of large SOC designs for two reasons:
Poor observability: The effects of a bug often take so many cycles to reach the design's outputs, and require such special sensitization of the circuit to do so, that the bug goes undetected by the end-to-end test.
Poor controllability: The stimulus sequence required to trigger a bug is often so complex that test writers can't deliberately generate it. Also, this sequence is often so unlikely that pseudo-random and system-level environments have essentially no chance of producing it.
Both observability and controllability become worse as the design size grows, which is why functional verification of monster SOC designs is such a big problem. Trying to verify a 10-million-gate design with traditional black-box methods is like trying to catch flies in a dark cathedral using a penlight; each fly is a challenge and there's no way to know when you're done!
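To make the contrast concrete, the sketch below shows a minimal black-box directed test in SystemVerilog; the design under test and its port names are hypothetical. The test is purely end-to-end: stimulus goes in at the pins and outputs are compared at the pins, so nothing in it observes internal state or reports whether a corner case was ever reached.

```systemverilog
// Minimal black-box directed test (sketch; "dut" and its ports are
// hypothetical names, not from the article).
module directed_test;
  logic       clk = 0;
  logic       rst_n;
  logic [7:0] in_data;
  logic       in_valid;
  logic [7:0] out_data;
  logic       out_valid;

  always #5 clk = ~clk;            // free-running clock

  dut u_dut (.*);                  // hypothetical design under test

  initial begin
    rst_n = 0; in_valid = 0;
    repeat (4) @(posedge clk);
    rst_n = 1;

    // Drive one transaction and check only the end-to-end result.
    @(posedge clk);
    in_data  <= 8'h3C;
    in_valid <= 1'b1;
    @(posedge clk);
    in_valid <= 1'b0;

    wait (out_valid);
    if (out_data !== 8'h3C)
      $error("output mismatch: got %0h, expected 3c", out_data);
    $finish;
  end
endmodule
```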
"White-box" functional verification tools contrast with black-box tools in that they use detailed information about the design's register-transfer level (RTL) implementation to improve observability and controllability. When used together with simulation, they help reach functional-verification closure quickly and reliably. White-box tools are best applied using a four-step methodology:
- Instrument the internal structures of the RTL design for better observability using a rich library of checkers and assertions (see the first sketch following this list). Make sure that all the worry cases and designer assumptions are checked, especially in the most complex areas of the design. Well-placed checkers can catch bugs in these areas immediately, even if the simulation test suite doesn't propagate their effects to the outputs. Use checkers that are testbench-independent so they won't need to change when the RTL modules are re-used in different environments.
- Document and solidify the interfaces of the design (both external and internal) using checkers and monitors that catch violations of the "interface contracts" as soon as they occur (see the second sketch following this list). Partitioned verification is a must at 10 million gates. Use interface monitors that are testbench-independent and that can travel freely with the RTL modules when they are re-used in other designs.
- Grade the design's black-box test suites using a structural coverage metric that measures whether the corner cases for internal design structures and interfaces have been validated. (For example, full and empty are corner cases for a FIFO queue; the third sketch following this list covers them.) Structural coverage correlates directly with bugs: a test suite that misses corner cases for RTL structures almost certainly misses bugs.
- Exercise the design's corner-case behavior directly using semi-formal technology. Semi-formal technology is "smart"; it uses knowledge about the RTL implementation to create the right input sequences to reach the corner cases and trigger bugs that black-box tests can't find.
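As a concrete illustration of the first step, here is a minimal internal checker for a FIFO, sketched in present-day SystemVerilog Assertions (SVA). SVA is an assumption of this sketch rather than the specific checker library the article describes, and the signal names (wr_en, rd_en, full, empty) are hypothetical:

```systemverilog
// Hypothetical FIFO checker (SVA sketch). The properties encode two
// designer assumptions that an end-to-end test would struggle to
// observe: the FIFO is never written while full, never read while empty.
module fifo_checker
  (input logic clk,
   input logic rst_n,
   input logic wr_en,
   input logic rd_en,
   input logic full,
   input logic empty);

  // Worry case 1: overflow. A write while full corrupts data silently.
  no_overflow: assert property (@(posedge clk) disable iff (!rst_n)
                                full |-> !wr_en)
    else $error("FIFO overflow: write while full");

  // Worry case 2: underflow. A read while empty returns garbage.
  no_underflow: assert property (@(posedge clk) disable iff (!rst_n)
                                 empty |-> !rd_en)
    else $error("FIFO underflow: read while empty");
endmodule
```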
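The second step's interface monitors might look like the following sketch: a testbench-independent monitor for a simple valid/ready handshake contract, attached to the RTL with SystemVerilog's bind so it travels with the module wherever it is re-used. The module and signal names are again hypothetical:

```systemverilog
// Hypothetical handshake monitor (SVA sketch). The "interface
// contract": once valid is raised it must hold until ready, and data
// must be stable while the transfer is pending.
module handshake_monitor
  (input logic        clk,
   input logic        rst_n,
   input logic        valid,
   input logic        ready,
   input logic [31:0] data);

  // Contract rule 1: valid may not drop before the transfer completes.
  hold_valid: assert property (@(posedge clk) disable iff (!rst_n)
                               valid && !ready |=> valid)
    else $error("protocol violation: valid dropped before ready");

  // Contract rule 2: data must hold steady while waiting for ready.
  hold_data: assert property (@(posedge clk) disable iff (!rst_n)
                              valid && !ready |=> $stable(data))
    else $error("protocol violation: data changed while pending");
endmodule

// Attach the monitor to every instance of the (hypothetical) producer
// module without touching any testbench.
bind producer handshake_monitor mon
  (.clk(clk), .rst_n(rst_n), .valid(valid), .ready(ready), .data(data));
```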
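And for the third step, corner-case coverage for the same hypothetical FIFO can be expressed as cover properties. Grading a black-box test suite against these covers shows whether the corner cases were ever exercised at all:

```systemverilog
// Corner-case coverage for the hypothetical FIFO above (SVA sketch).
// A test suite that never fires these covers has never visited the
// corner cases, and almost certainly misses the bugs that live there.
module fifo_cover
  (input logic clk,
   input logic rst_n,
   input logic rd_en,
   input logic full,
   input logic empty);

  // Corner case: the FIFO actually filled up at least once.
  hit_full:  cover property (@(posedge clk) disable iff (!rst_n) full);

  // Corner case: the FIFO drained back to empty after holding data.
  hit_empty: cover property (@(posedge clk) disable iff (!rst_n)
                             !empty ##[1:$] empty);

  // Corner case: a read arrived exactly at the full boundary.
  read_at_full: cover property (@(posedge clk) disable iff (!rst_n)
                                full && rd_en);
endmodule
```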
Semi-formal verification links simulation and formal verification. The same checkers and assertions that watch for bugs in simulation become targets for semi-formal tools. Running all the simulation test suites typically fires some checkers, revealing some of the bugs. Then, semi-formal tools can target the checkers directly, finding the right stimulus to trigger additional bugs. Semi-formal technology is vastly more effective than any black-box simulation method at generating stimulus to make checkers fire.
The same protocol monitors are used for both simulation and semi-formal verification. These monitors capture all the protocol rules for complex interfaces. In simulation, they report the protocol violations triggered by existing test suites. Then, semi-formal tools can use the monitors to target additional protocol violations directly.
Furthermore, the same input constraints are used for both simulation and semi-formal verification. Semi-formal tools need to avoid illegal stimulus because it may result in false checker firings (false errors).
The monitors that check the legality of interface protocols in simulation are simply re-used as constraints by semi-formal tools; the same rules are correct in both worlds, as the sketch below illustrates.
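In SVA terms (again an illustrative assumption, not the article's specific tool flow), the reuse is literal: the rule is written once, then asserted on an interface the design drives or assumed on an interface it receives. A hypothetical parameter selects the role:

```systemverilog
// Sketch: one protocol rule serving as checker or constraint.
// CONSTRAIN_INPUTS is a hypothetical parameter for this illustration.
module protocol_rule #(parameter bit CONSTRAIN_INPUTS = 0)
  (input logic clk,
   input logic rst_n,
   input logic valid,
   input logic ready);

  property no_early_drop;
    @(posedge clk) disable iff (!rst_n) valid && !ready |=> valid;
  endproperty

  generate
    if (CONSTRAIN_INPUTS) begin : as_constraint
      // Same rule, used to keep generated stimulus legal and thereby
      // prevent false checker firings.
      assume property (no_early_drop);
    end else begin : as_checker
      // Same rule, used to catch violations the moment they occur.
      assert property (no_early_drop);
    end
  endgenerate
endmodule
```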
Traditional black-box simulation, which worked well for designs below a few hundred thousand gates, is no longer adequate for functional verification of large SOC designs. White-box methodology calls for instrumenting the RTL structures inside the design, adding protocol monitors on the interfaces, grading the black-box test suites, and using semi-formal technology to exercise the design's corner cases. White-box methods and technology can extend simulation and allow design teams to reliably reach functional verification closure for SOC designs of 10 million gates and more.
Curt Widdoes is co-founder and CEO of 0-In Design Automation (San Jose). Previously, Widdoes founded Logic Modeling (purchased by Synopsys) and co-founded Valid Logic.